Challenges in Deep RL Navigation
The goal of autonomous navigation for robots using Deep RL is an end-to-end, “pixels to actions” solution. As the cost of high-performance compute and data storage has fallen, deep reinforcement learning approaches have risen in popularity within the AI community, and autonomous navigation using Deep RL is now possible in some constrained environments.
Motivations
The motivations for pursuing this approach over more conventional methods include:
- The importance of end-to-end mobile autonomy as an aspect of a larger goal: to develop general-purpose AI systems that can interact and learn from the world.
- The implicit connection between representation and actions in Deep RL networks: actions are learned jointly with the representation, which ensures that task-relevant features are encoded in the network.
- The deeper understanding of surroundings that can be achieved with RL agents: an agent can perform experiments to better “understand” its complex and changing environment, which leads to more nuanced and human-like behavior by the robot.
Challenges
The field of Deep RL is still in its infancy and an area of active research. Applying it to robotic navigation in the real world is complex and raises several open challenges, including:
- The rewards an RL algorithm learns from are often sparse and hard to define in real environments. The Mirowski paper described earlier partially addressed this problem by adding auxiliary goals, such as depth prediction and loop-closure detection, that supply a denser learning signal (see the first sketch after this list).
- Environments often include dynamic elements and constant change. Agents therefore need memory on several time scales, to retain the goal location, the map being built up from visual observations, and longer-term boundaries and cues from the environment. A recurrent policy is one common way to provide this (see the second sketch after this list).
- Training a robot through trial and error in simulation cannot fully prepare it for a real-world environment. While training in a simulator costs time and compute, it avoids the physical risk that failures carry on a real mobile robot. Advances in photo-realistic simulation environments may hold promise for narrowing this sim-to-real gap (see the third sketch after this list).
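To make the sparse-reward challenge concrete, the sketch below attaches an auxiliary depth-prediction head to a policy network, in the spirit of the auxiliary goals mentioned above, so the shared encoder receives a dense learning signal even when extrinsic rewards are rare. This is a minimal PyTorch sketch: the network shapes, the depth targets, and the `aux_weight` hyperparameter are illustrative assumptions, not the architecture from the Mirowski paper, and the RL loss is a placeholder.

```python
import torch
import torch.nn as nn

class NavPolicyWithAux(nn.Module):
    """Policy network with an auxiliary depth-prediction head (illustrative)."""
    def __init__(self, num_actions=4, feat_dim=256, depth_dim=64):
        super().__init__()
        # Shared visual encoder: 84x84 grayscale frames -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, feat_dim), nn.ReLU(),
        )
        self.policy_head = nn.Linear(feat_dim, num_actions)  # action logits
        self.value_head = nn.Linear(feat_dim, 1)             # state value
        self.depth_head = nn.Linear(feat_dim, depth_dim)     # auxiliary task

    def forward(self, frames):
        feat = self.encoder(frames)
        return self.policy_head(feat), self.value_head(feat), self.depth_head(feat)

# Dummy batch standing in for rollout data.
model = NavPolicyWithAux()
frames = torch.randn(8, 1, 84, 84)    # observations
depth_targets = torch.rand(8, 64)     # coarse depth labels from the simulator
logits, values, depth_pred = model(frames)

# Placeholder RL loss (a real actor-critic loss would go here) plus the
# auxiliary loss; the auxiliary gradient shapes the shared encoder even
# when extrinsic rewards are sparse.
rl_loss = -values.mean()              # stand-in for the actual RL objective
aux_loss = nn.functional.mse_loss(depth_pred, depth_targets)
aux_weight = 0.1                      # assumed weighting hyperparameter
total_loss = rl_loss + aux_weight * aux_loss
total_loss.backward()
```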
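For the memory challenge, a common pattern is to place a recurrent layer between the visual encoder and the policy head so the hidden state carries information across time steps. A minimal sketch, again with assumed dimensions; in practice the hidden state is reset at episode boundaries:

```python
import torch
import torch.nn as nn

class RecurrentNavPolicy(nn.Module):
    """Recurrent policy: the LSTM hidden state acts as the agent's memory."""
    def __init__(self, obs_dim=256, hidden_dim=128, num_actions=4):
        super().__init__()
        self.lstm = nn.LSTMCell(obs_dim, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, feat, state):
        h, c = self.lstm(feat, state)   # carry memory across steps
        return self.policy_head(h), (h, c)

policy = RecurrentNavPolicy()
h = torch.zeros(1, 128)
c = torch.zeros(1, 128)
for t in range(10):                     # one short rollout
    feat = torch.randn(1, 256)          # encoded observation at step t
    logits, (h, c) = policy(feat, (h, c))
    # (h, c) would be re-zeroed at an episode boundary, so memory spans one episode
```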
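For the sim-to-real challenge, one widely used complement to photo-realism is domain randomization: simulator parameters are re-sampled every episode so a policy trained in simulation treats the real world as just one more variation. The `SimConfig` fields and the commented-out `make_env` call below are hypothetical; only the randomization pattern matters:

```python
import random
from dataclasses import dataclass

@dataclass
class SimConfig:
    """Hypothetical simulator parameters to randomize each episode."""
    light_intensity: float
    floor_friction: float
    camera_height_m: float
    texture_seed: int

def sample_config() -> SimConfig:
    # Draw each parameter from a broad range so the trained policy
    # does not overfit to one particular rendering of the world.
    return SimConfig(
        light_intensity=random.uniform(0.3, 1.5),
        floor_friction=random.uniform(0.4, 1.0),
        camera_height_m=random.uniform(0.25, 0.45),
        texture_seed=random.randrange(10_000),
    )

for episode in range(3):
    cfg = sample_config()
    # env = make_env(cfg)  # hypothetical: rebuild the simulator per episode
    print(f"episode {episode}: {cfg}")
```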
Staying abreast of the latest breakthroughs, reading academic papers on these topics, and experimenting with algorithms that address these challenges is challenging too! The ability to see the potential in breakthroughs and apply them commercially is a valuable career skill that must be developed through practice. If you are interested in learning more about Deep RL, the recent survey paper by Arulkumaran et al. (listed in the references below) covers a great deal of material. To experiment with new algorithms in a robotic environment, OpenAI provides outstanding software and environments for learning and testing. In February 2018, OpenAI released a number of robotic environments for OpenAI Gym… another great place to expand your knowledge and understanding!
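As a starting point for that kind of experimentation, the snippet below runs a random policy in one of those robotic environments. It assumes the classic Gym API (reset returning only the observation, step returning four values) and a working MuJoCo installation; the random policy is just a placeholder for an algorithm under test.

```python
import gym  # assumes classic Gym (pre-0.26) plus the MuJoCo-backed robotics envs

env = gym.make('FetchReach-v1')  # goal-based robotic arm environment
obs = env.reset()                # dict: 'observation', 'achieved_goal', 'desired_goal'
for _ in range(200):
    action = env.action_space.sample()        # placeholder random policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```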
References
- Arulkumaran, Kai, et al. "A brief survey of deep reinforcement learning." arXiv preprint arXiv:1708.05866 (2017).
- Kahn, Gregory, et al. "Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation." arXiv preprint arXiv:1709.10489 (2017).
- Zhu, Yuke, et al. "Target-driven visual navigation in indoor scenes using deep reinforcement learning." Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017.
- OpenAI. "Ingredients for Robotics Research." OpenAI Blog, February 2018.